We investigate the asymptotic properties of deep residual networks (ResNets) as the number of layers increases. We first show the existence of scaling regimes for trained weights markedly different from those implicitly assumed in the neural ODE literature. We then study the convergence of the hidden-state dynamics in these scaling regimes, showing that one may obtain an ODE, a stochastic differential equation (SDE), or neither. In particular, our findings point to the existence of a diffusive regime in which the deep-network limit is described by a class of SDEs. Finally, we derive the corresponding scaling limits for the backpropagation dynamics.
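As an illustrative rendering of these regimes (the update rule and the specific scalings below are schematic assumptions, not the paper's exact conditions), consider a residual update with a depth-dependent step:

```latex
\[
  h_{k+1} = h_k + \delta_L\, f(h_k, \theta_k), \qquad k = 0, \dots, L-1 .
\]
% If \delta_L = 1/L and the trained weights vary smoothly with depth,
% the hidden state converges to the neural ODE
\[
  \mathrm{d}h_t = f(h_t, \theta_t)\,\mathrm{d}t .
\]
% If instead \delta_L = 1/\sqrt{L} and the weights behave like independent
% mean-zero increments across layers, the limit is diffusive:
\[
  \mathrm{d}h_t = \sigma(h_t)\,\mathrm{d}W_t ,
\]
% with W a Brownian motion; other weight scalings may yield neither limit.
```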
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We perform self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training employs self-supervised losses over multiple cues, such as appearance, motion, and pose trajectories extracted from videos, to learn generalizable representations. Our method then extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach on key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks such as phase classification. Qualitative results show that the extracted key steps succinctly and meaningfully represent the procedural tasks.
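A minimal sketch of the clustering stage, assuming per-frame embeddings from the learned representation (the function and its parameters are illustrative, not the paper's exact algorithm):

```python
# Hypothetical key-step extraction: cluster frame embeddings, pick one
# exemplar frame per cluster, and order the exemplars by time.
import numpy as np
from sklearn.cluster import KMeans

def extract_key_steps(frame_features: np.ndarray, n_steps: int = 8):
    """frame_features: (T, D) self-supervised embeddings of one procedural video."""
    km = KMeans(n_clusters=n_steps, n_init=10).fit(frame_features)
    key_frames = []
    for c in range(n_steps):
        members = np.where(km.labels_ == c)[0]
        # The frame closest to the cluster centroid serves as the step's exemplar.
        d = np.linalg.norm(frame_features[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(int(members[np.argmin(d)]))
    return sorted(key_frames)  # key steps in temporal order
```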
Person recognition at a distance entails recognizing the identity of an individual appearing in images or videos collected by long-range imaging systems such as drones or surveillance cameras. Despite recent advances in deep convolutional neural networks (DCNNs), this remains challenging: images or videos collected by long-range cameras often suffer from atmospheric turbulence, blur, low resolution, unconstrained poses, and poor illumination. In this paper, we provide a brief survey of recent advances in person recognition at a distance. In particular, we review recent work on multi-spectral face verification, person re-identification, and gait-based analysis techniques. Furthermore, we discuss the merits and drawbacks of existing approaches and identify important yet underexplored challenges for deploying remote person recognition systems in the wild.
We address the problem of few-shot classification, where the goal is to learn a classifier from a limited set of samples. While data-driven learning has proven effective in various applications, learning from scarce data remains challenging. To address this challenge, existing approaches consider various data augmentation techniques to increase the number of training samples. Pseudo-labeling is commonly used in the few-shot setup, where approximate labels are estimated for a large set of unlabeled images. We propose DiffAlign, which focuses on generating images from class labels. Specifically, we leverage the recent success of generative models (e.g., DALL-E and diffusion models) that can generate realistic images from text. However, naive learning on synthetic images is inadequate due to the domain gap between real and synthetic images. We therefore employ a maximum mean discrepancy (MMD) loss to align the synthetic images with the real images, minimizing the domain gap. We evaluate our method on the standard few-shot classification benchmarks CIFAR-FS, FC100, miniImageNet, and tieredImageNet, and on a cross-domain few-shot classification benchmark, miniImageNet to CUB. The proposed approach significantly outperforms the state-of-the-art in both 5-shot and 1-shot setups on these benchmarks. Our approach is also shown to be effective in the zero-shot classification setup.
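A minimal sketch of an RBF-kernel MMD penalty of the kind described, applied to feature batches (the kernel choice and bandwidth are assumptions on our part):

```python
# Biased RBF-kernel MMD estimator between real and synthetic feature batches;
# minimizing it pulls the two feature distributions together.
import torch

def mmd_loss(real: torch.Tensor, synth: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """real: (N, D) and synth: (M, D) feature batches."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return rbf(real, real).mean() + rbf(synth, synth).mean() - 2 * rbf(real, synth).mean()
```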
We present Progressively Deblurring Radiance Field (PDRF), a novel approach for efficiently reconstructing high-quality radiance fields from blurry images. While current state-of-the-art (SoTA) scene reconstruction methods achieve photo-realistic rendering from clean source views, their performance suffers when the source views are affected by blur, as is commonly observed for images in the wild. Previous deblurring methods either do not account for 3D geometry or are computationally intensive. To address these issues, PDRF uses a progressive deblurring scheme in radiance field modeling that accurately models blur by incorporating 3D scene context. PDRF further employs an efficient importance sampling scheme, which leads to fast scene optimization. Specifically, PDRF proposes a coarse ray renderer to quickly estimate voxel densities and features; a fine voxel renderer is then used to achieve high-quality ray tracing. We perform extensive experiments and show that PDRF is 15x faster than the previous SoTA while achieving better performance on both synthetic and real scenes.
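A rough sketch of the coarse-to-fine sampling idea, assuming the coarse pass yields per-ray density estimates (PDRF's actual renderers differ in detail):

```python
# Importance resampling along rays: draw fine sample depths from the CDF implied
# by the coarse density estimates, so fine rendering concentrates where mass is.
import torch

def importance_resample(t_coarse, density_coarse, n_fine: int = 64):
    """t_coarse: (R, Nc) coarse sample depths; density_coarse: (R, Nc) densities."""
    weights = density_coarse + 1e-5                       # avoid a degenerate CDF
    pdf = weights / weights.sum(dim=-1, keepdim=True)
    cdf = torch.cumsum(pdf, dim=-1)
    u = torch.rand(t_coarse.shape[0], n_fine, device=t_coarse.device)
    idx = torch.searchsorted(cdf, u).clamp(max=t_coarse.shape[-1] - 1)
    return torch.gather(t_coarse, -1, idx)                # (R, n_fine) fine depths
```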
Reconstructing lung cone-beam computed tomography (CBCT) under respiratory motion is a long-standing challenge. This work goes a step further to address the challenging setting of reconstructing multi-phase lung images from a single 3D CBCT acquisition. To this end, we introduce REspiratory-GAted Synthesis of views (REGAS). REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in the reconstructed images. The method enables better estimation of inter-phase deformation vector fields (DVFs), which are used to enhance the reconstruction quality of direct observations without synthesis. To address the large memory cost of deep neural networks on high-resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections. REGAS requires no additional measurements such as prior scans, air-flow volume, or breathing velocity. Our extensive experiments show that REGAS significantly outperforms comparable methods in quantitative metrics and visual quality.
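As a toy illustration of a differentiable forward projection (the distributed Ray Path Transformation itself is not reproduced here, and the volume-sampling details are assumptions):

```python
# Integrate a 3D volume along rays via trilinear sampling; because grid_sample
# is differentiable, gradients flow back to the volume being reconstructed.
import torch
import torch.nn.functional as F

def forward_project(volume, ray_points):
    """volume: (1, 1, D, H, W); ray_points: (R, S, 3) in [-1, 1] grid coordinates."""
    grid = ray_points.view(1, ray_points.shape[0], ray_points.shape[1], 1, 3)
    samples = F.grid_sample(volume, grid, align_corners=True)  # (1, 1, R, S, 1)
    # Per-ray line integral (up to the constant ray step length).
    return samples.view(ray_points.shape[0], -1).sum(dim=-1)
```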
This paper proposes a new approach to unsupervised fine-tuning and self-training of recurrent neural network (RNN)-Transducer (RNN-T) end-to-end (E2E) automatic speech recognition (ASR) systems using unlabeled speech data. When using unlabeled audio, conventional systems fine-tune/self-train with ASR hypotheses as targets and are therefore susceptible to the ASR performance of the base model. Here, to mitigate the influence of ASR errors when using unlabeled data, we propose a multiple-hypothesis RNN-T loss that incorporates multiple ASR 1-best hypotheses into the loss function. For the fine-tuning task, ASR experiments on LibriSpeech show that the multiple-hypothesis approach achieves a relative reduction of 14.2% in word error rate (WER) on the test_other set compared with the single-hypothesis approach. For the self-training task, ASR models are trained using supervised data from Wall Street Journal (WSJ) and Aurora-4, along with CHiME-4 real noisy data as the unlabeled data. The multiple-hypothesis approach yields a relative WER reduction of 3.3% on the CHiME-4 single-channel real noisy evaluation set compared with the single-hypothesis approach.
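A sketch of the multiple-hypothesis idea as a weighted sum of transducer losses over N-best pseudo-labels, using torchaudio's rnnt_loss (the joint-network helper and the weighting scheme are assumptions, not the paper's exact formulation):

```python
# Combine RNN-T losses over several 1-best hypotheses from the base model; the
# joint network is re-run per hypothesis because transducer logits depend on
# the target sequence.
import torch
from torchaudio.functional import rnnt_loss

def multi_hyp_rnnt_loss(joint, enc_out, logit_lengths, hypotheses, weights):
    """joint(enc_out, targets) -> (B, T, U+1, V) logits [hypothetical helper];
    hypotheses: list of (targets, target_lengths); weights: list of floats."""
    total = 0.0
    for (targets, target_lengths), w in zip(hypotheses, weights):
        logits = joint(enc_out, targets)
        total = total + w * rnnt_loss(logits, targets, logit_lengths,
                                      target_lengths, blank=0)  # blank id assumed 0
    return total / sum(weights)
```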
Unsupervised and semi-supervised ML methods such as variational autoencoders (VAEs) have been widely adopted across multiple areas of physics, chemistry, and materials science due to their capability for disentangled representations and their ability to find latent manifolds for classification and regression of complex experimental data. Like other ML problems, VAEs require hyperparameter tuning, e.g., balancing the Kullback-Leibler (KL) and reconstruction terms. However, the training process, as well as the resulting manifold topology and connectivity, depends not only on the hyperparameters but also on their evolution during training. Because of the inefficiency of exhaustive search in a high-dimensional hyperparameter space, we here explore a latent Bayesian optimization (zBO) approach for hyperparameter trajectory optimization in unsupervised and semi-supervised ML, and demonstrate it for a joint VAE with rotational invariances. We demonstrate the application of this method for finding joint discrete and continuous rotationally invariant representations for MNIST and for experimental data from a plasmonic nanoparticle material system. The performance of the proposed approach is discussed extensively; it allows for high-dimensional hyperparameter tuning or trajectory optimization of other ML models as well.
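A sketch of hyperparameter-trajectory optimization in this spirit: parameterize the KL-weight schedule by a few knots and optimize them with GP-based Bayesian optimization (the knot parameterization, the use of skopt, and the stub objective are our assumptions, not the paper's zBO):

```python
# Optimize a piecewise-linear beta(t) schedule for the KL term via gp_minimize.
from skopt import gp_minimize
import numpy as np

NUM_EPOCHS = 50

def train_vae_and_score(betas):
    """Hypothetical stand-in: train the VAE with per-epoch KL weights `betas`
    and return a validation loss. Replace with a real training loop."""
    return float(np.var(betas))  # placeholder so the sketch runs end to end

def objective(knots):
    # Interpolate three optimized knots into a full per-epoch schedule.
    betas = np.interp(np.linspace(0, 1, NUM_EPOCHS), [0.0, 0.5, 1.0], knots)
    return train_vae_and_score(betas)

result = gp_minimize(objective, dimensions=[(1e-4, 10.0)] * 3, n_calls=30)
print(result.x)  # best schedule knots found
```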
Recent advances in machine learning methods, together with the emerging availability of programmable interfaces for scanning probe microscopes (SPMs), have propelled automated and autonomous microscopy to the forefront of attention in the scientific community. However, enabling automated microscopy requires developing task-specific machine learning methods, understanding the interplay between physics discovery and machine learning, and fully defined discovery workflows. This, in turn, requires balancing the physical intuition and prior knowledge of domain scientists with rewards that define experimental goals, and machine learning algorithms that can translate these into specific experimental protocols. Here, we discuss the basic principles of Bayesian active learning and illustrate its application to SPM. We progress from Gaussian processes as a simple data-driven method, and Bayesian inference for physical models as an extension of physics-based functional fits, to more complex deep kernel learning methods, structured Gaussian processes, and hypothesis learning. These frameworks allow the use of prior data, the discovery of specific functionalities encoded in spectral data, and the exploration of physical laws manifesting during an experiment. The discussed frameworks can be applied universally to all techniques combining imaging and spectroscopy, including SPM methods, nanoindentation, electron microscopy and spectroscopy, and chemical imaging methods, and can be particularly impactful for destructive or irreversible measurements.
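A bare-bones Gaussian-process active learning loop of the kind discussed, using pure uncertainty sampling (the measurement stub and the acquisition rule are simplified assumptions; real SPM workflows use richer, physics-informed rewards):

```python
# Fit a GP to measured points, then measure next wherever the predictive
# uncertainty over the candidate grid is largest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure(x):
    """Hypothetical stand-in for an SPM measurement at location x."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1])

grid = np.stack(np.meshgrid(np.linspace(0, 1, 30),
                            np.linspace(0, 1, 30)), -1).reshape(-1, 2)
X = grid[np.random.choice(len(grid), 5, replace=False)]   # seed measurements
y = np.array([measure(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(0.1)).fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(std)]                          # most uncertain location
    X = np.vstack([X, x_next])
    y = np.append(y, measure(x_next))
```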
Recently, contrastive learning has attracted increasing interest in neural text generation as a new way to alleviate the exposure bias problem. It introduces a sequence-level training signal, which is crucial for generation tasks that rely on auto-regressive decoding. However, previous methods using contrastive learning in neural text generation usually lead to inferior performance. In this paper, we analyse the underlying reasons and propose a new Contrastive Neural Text generation framework, CoNT. CoNT addresses the bottlenecks that prevent contrastive learning from being widely adopted in generation tasks from three aspects: the construction of contrastive examples, the choice of contrastive loss, and the decoding strategy. We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation, and commonsense generation. Experimental results show that CoNT clearly outperforms the conventional training framework on all ten benchmarks by a convincing margin. In particular, CoNT surpasses the previously most competitive contrastive learning method for text generation by 1.50 BLEU on machine translation and 1.77 ROUGE-1 on summarization. It achieves new state-of-the-art results on summarization, code comment generation (without external data), and data-to-text generation.
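A sketch of a sequence-level contrastive loss in InfoNCE form (CoNT's actual construction of contrastive examples and its loss differ in detail; this only illustrates the sequence-level signal):

```python
# Pull the source representation toward the reference sequence and push it away
# from model-generated negatives, scoring sequences by cosine similarity.
import torch
import torch.nn.functional as F

def seq_contrastive_loss(src, pos, negs, tau: float = 0.1):
    """src, pos: (B, D) pooled sequence representations; negs: (B, K, D) negatives."""
    src = F.normalize(src, dim=-1)
    pos = F.normalize(pos, dim=-1)
    negs = F.normalize(negs, dim=-1)
    s_pos = (src * pos).sum(-1, keepdim=True) / tau          # (B, 1)
    s_neg = torch.einsum('bd,bkd->bk', src, negs) / tau      # (B, K)
    logits = torch.cat([s_pos, s_neg], dim=-1)
    # The positive sits at index 0 of each row of logits.
    return F.cross_entropy(logits, torch.zeros(src.size(0), dtype=torch.long))
```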